Nginx's speed, and how to replicate it [migrated]

Posted by Mediocre Gopher on Server Fault, 2011-11-14.

I'm interested in this from an academic standpoint rather than a practical one; I don't plan on creating a production web server to compete with nginx. What I'm wondering is how exactly nginx is so fast. The top Google result for this is this thread, but it merely links to a cryptic slideshow and a general overview of different I/O strategies. All the other results simply describe how fast nginx is, rather than explaining why.

I tried building a simple Erlang server to compete with nginx, but to no avail; nginx won out. All my server does is spawn a new process for each request, use that process to read the file to a socket, then close the file and kill the process. It's not complicated, and given Erlang's lightweight processes and underlying AIO structure I thought it would compete, but nginx still wins by a consistent 300 ms on average under a heavy stress test.
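
For concreteness, here is a rough sketch of the kind of thing my server does (the module name, the "www" document root, and the bare-bones GET-only handling are made up for this post; the real code is a bit more careful):

    -module(simple_server).
    -export([start/1]).

    %% Listen on Port and accept connections, handing each request
    %% to a freshly spawned Erlang process.
    start(Port) ->
        {ok, Listen} = gen_tcp:listen(Port, [binary, {packet, http_bin},
                                             {active, false}, {reuseaddr, true}]),
        accept_loop(Listen).

    accept_loop(Listen) ->
        {ok, Socket} = gen_tcp:accept(Listen),
        Pid = spawn(fun() -> receive {go, S} -> handle(S) end end),
        ok = gen_tcp:controlling_process(Socket, Pid),
        Pid ! {go, Socket},
        accept_loop(Listen).

    %% Read the request line, read the whole file into memory, write it
    %% to the socket, then close everything; the process simply dies.
    handle(Socket) ->
        {ok, {http_request, 'GET', {abs_path, Path}, _Vsn}} = gen_tcp:recv(Socket, 0),
        File = filename:join("www", tl(binary_to_list(Path))),
        case file:read_file(File) of
            {ok, Body} ->
                gen_tcp:send(Socket, [<<"HTTP/1.1 200 OK\r\nContent-Length: ">>,
                                      integer_to_list(byte_size(Body)),
                                      <<"\r\nConnection: close\r\n\r\n">>, Body]);
            {error, _} ->
                gen_tcp:send(Socket, <<"HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n">>)
        end,
        gen_tcp:close(Socket).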

What is nginx doing that my simple server isn't? My first thought was that it keeps files in main memory instead of tossing them out between requests, but the filesystem cache already does that, so I didn't think it would make that great a difference. Am I wrong, or is there something else I'm missing?
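
The one mechanical difference I can see on the send path is that the sketch above copies the whole file onto the Erlang heap with file:read_file/1 before writing it back out, whereas nginx (with its sendfile directive) has the kernel push the file from the page cache straight to the socket. A hypothetical variant of handle/1 using file:sendfile/2 (available from OTP R15B) would look something like this, though I don't know whether that copy is where the 300 ms goes:

    %% Hypothetical alternative to the read_file/send pair above: write the
    %% headers, then let the kernel copy the file from the page cache to the
    %% socket without it ever passing through the Erlang heap.
    handle_sendfile(Socket, File) ->
        Size = filelib:file_size(File),
        gen_tcp:send(Socket, [<<"HTTP/1.1 200 OK\r\nContent-Length: ">>,
                              integer_to_list(Size),
                              <<"\r\nConnection: close\r\n\r\n">>]),
        {ok, _Sent} = file:sendfile(File, Socket),
        gen_tcp:close(Socket).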
